8 research outputs found

    Meta-QoS performance of earliest-deadline-first and rate-monotonic scheduling of smoothed video data in a client-server environment

    In this paper we present an extensive performance study of two modified earliest-deadline-first (EDF) and rate-monotonic (RM) scheduling algorithms that are enhanced to provide quality-of-service (QoS) guarantees for smoothed video data. Using a probabilistic definition of QoS, we incorporate admission control conditions into the two algorithms. We further include a counter-based scheduling module as the core scheduling mechanism, which adaptively adjusts the actual QoS levels assigned to requests. Our theoretical analysis of the two enhanced algorithms, called QEDF and QRM, shows that the QRM algorithm is more robust than the QEDF algorithm across different workload and utilization conditions. We also propose a new metric, called meta-QoS, to quantify the overall performance of a packet scheduler given a set of simultaneous requests. In our experiments, we find that the QRM algorithm can sustain a fairly stable level of meta-QoS even as the workload and utilization levels increase, whereas the QEDF algorithm is less desirable for a high level of utilization and a large number of requests.
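
    A minimal sketch of the two mechanisms the abstract names, probabilistic admission control and counter-based QoS adjustment. The admission rule, field names, step sizes, and thresholds below are illustrative assumptions, not the conditions derived in the paper.

```python
# Hypothetical sketch: probabilistic-QoS admission plus a counter-based
# adjustment loop, loosely in the spirit of the QEDF/QRM description above.
from dataclasses import dataclass

@dataclass
class VideoRequest:
    rate: float          # fraction of server capacity the smoothed stream needs
    target_qos: float    # requested probability of meeting deadlines, e.g. 0.95
    granted_qos: float = 0.0
    missed: int = 0      # deadline misses observed so far
    served: int = 0      # frames scheduled so far

def admit(requests, new_req, capacity=1.0):
    """Admit only if the QoS-weighted utilization still fits (assumed rule)."""
    load = sum(r.rate * r.target_qos for r in requests) + new_req.rate * new_req.target_qos
    if load <= capacity:
        new_req.granted_qos = new_req.target_qos
        return True
    return False

def adjust_qos(req, step=0.01, slack=0.02):
    """Counter-based adaptation: degrade QoS after misses, restore it when stable."""
    observed = 1.0 - req.missed / max(req.served, 1)
    if observed < req.granted_qos:
        req.granted_qos = max(observed, req.granted_qos - step)
    elif observed > req.granted_qos + slack and req.granted_qos < req.target_qos:
        req.granted_qos = min(req.target_qos, req.granted_qos + step)

# Example: an empty server admits a 30%-rate stream asking for 95% QoS.
print(admit([], VideoRequest(rate=0.3, target_qos=0.95)))   # True: 0.285 <= 1.0
```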

    Quasi-static cluster-computing approach for dynamic channel assignment in cellular mobile communication systems

    Efficient management of the radio spectrum can be accomplished with channel assignment techniques, which allocate different channels of the spectrum to the cells of the network in a conflict-free manner (i.e., co-channel interference is minimized). The problem of dynamically reallocating channels in response to changes in user location patterns, which occur frequently in a micro-cell network architecture, is even more difficult to tackle in a timely manner. Most existing approaches use sequential-search-based heuristics, which cannot produce high-quality allocations fast enough to cope with frequent variations in traffic requirements. In this paper, we propose a quasi-static approach that combines the merits of both static and dynamic schemes. The static component of our approach uses a parallel genetic algorithm to generate a suite of representative assignments based on a set of different estimated traffic scenarios. At run time, the dynamic component observes the actual traffic requirement and retrieves the representative assignment of the closest scenario from the off-line table. The retrieved assignment is then quickly refined using a fast parallel local search algorithm. Our extensive simulation experiments indicate that the proposed quasi-static system significantly outperforms other dynamic channel assignment techniques in terms of both blocking probability and computational overhead.
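
    An illustrative sketch of the on-line stage described above: pick the precomputed assignment whose traffic scenario is closest to the observed demand, then refine it locally. The data layout, Euclidean distance metric, and greedy repair loop are assumptions for illustration; in the paper the off-line table is built by a parallel genetic algorithm and the refinement uses a fast parallel local search.

```python
import math

def closest_scenario(offline_table, observed_demand):
    """offline_table: list of (scenario_demand_vector, assignment) pairs."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(offline_table, key=lambda entry: dist(entry[0], observed_demand))[1]

def refine(assignment, demand, conflicts, channels, iters=100):
    """Greedy local repair: give the most under-served cell a conflict-free channel."""
    assignment = {cell: set(chs) for cell, chs in assignment.items()}
    for _ in range(iters):
        # cell with the largest shortfall between demand and allocated channels
        cell = max(demand, key=lambda c: demand[c] - len(assignment.get(c, set())))
        if demand[cell] - len(assignment.get(cell, set())) <= 0:
            break                                  # every cell's demand is met
        taken = set().union(*(assignment.get(n, set()) for n in conflicts.get(cell, [])))
        free = [ch for ch in channels
                if ch not in taken and ch not in assignment.get(cell, set())]
        if not free:
            break                                  # no conflict-free channel available
        assignment.setdefault(cell, set()).add(free[0])
    return assignment
```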

    Parallel algorithm for compile-time scheduling of parallel programs on multiprocessors

    In this paper, we propose a parallel randomized algorithm, called Parallel Fast Assignment using Search Technique (PFAST), for scheduling parallel programs represented by directed acyclic graphs (DAGs) at compile time. The PFAST algorithm has O(e) time complexity, where e is the number of edges in the DAG. This linear-time algorithm works by first generating an initial solution and then refining it using a parallel random search. Using a prototype computer-aided parallelization and scheduling tool called CASCH, the algorithm is found to outperform numerous previous algorithms while taking dramatically smaller execution times. A distinctive feature of this research is that, instead of simulations, the proposed algorithm is evaluated and compared with other algorithms using the CASCH tool with real applications running on the Intel Paragon. The PFAST algorithm is also evaluated with randomly generated DAGs for which optimal schedules are known; it generated optimal solutions for a majority of the test cases and close-to-optimal solutions for the others. The proposed algorithm is the fastest scheduling algorithm known to us and is an attractive choice for scheduling under running-time constraints.
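
    A minimal sketch of the generate-then-refine idea behind PFAST as described above: build an initial processor assignment along a topological order, then repeatedly try random moves and keep the ones that do not lengthen the schedule. The DAG encoding, the simplified makespan model, and the serial search loop (PFAST runs the search in parallel) are assumptions, not the paper's implementation.

```python
import random

def makespan(order, assign, comp, comm, preds, n_procs):
    """Finish time of the last task under a simple list-scheduling cost model."""
    finish, proc_free = {}, [0.0] * n_procs
    for t in order:                                    # order must be topological
        ready = max((finish[u] + (comm[(u, t)] if assign[u] != assign[t] else 0.0)
                     for u in preds.get(t, [])), default=0.0)
        start = max(ready, proc_free[assign[t]])
        finish[t] = start + comp[t]
        proc_free[assign[t]] = finish[t]
    return max(finish.values(), default=0.0)

def pfast_like(order, comp, comm, preds, n_procs, steps=500, seed=0):
    random.seed(seed)
    assign = {t: i % n_procs for i, t in enumerate(order)}      # initial solution
    best = makespan(order, assign, comp, comm, preds, n_procs)
    for _ in range(steps):                                      # random search
        t, p = random.choice(order), random.randrange(n_procs)
        old = assign[t]
        assign[t] = p
        cand = makespan(order, assign, comp, comm, preds, n_procs)
        if cand <= best:
            best = cand                                         # keep the move
        else:
            assign[t] = old                                     # undo the move
    return assign, best
```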

    Exploiting duplication to minimize the execution times of parallel programs on message-passing systems

    Communication overhead is one of the main factors that can limit the speedup of parallel programs on message-passing parallel architectures. This limiting factor is even more predominant in distributed systems such as clusters of homogeneous or heterogeneous workstations. However, excessive communication overhead can be reduced by redundantly executing some of the tasks of a parallel program on which other tasks critically depend. In this paper, we study the problem of duplication-based static scheduling of parallel programs on parallel and distributed systems. Previous duplication-based scheduling algorithms assumed the availability of an unlimited number of homogeneous processors. Here, we consider more practical scenarios: when the number of processors is limited, and when the system consists of heterogeneous computers. For the first scenario, we propose an algorithm that minimizes the execution time of a parallel program by controlling the level of duplication according to the number of processors available. For the second scenario, we design an algorithm that simultaneously exploits duplication and processor heterogeneity to minimize the total execution time of a parallel program. The proposed algorithms are suitable for low as well as high communication-to-computation ratios.
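
    A hypothetical sketch of the core duplication decision described above: when a task's start time is dominated by a message from a critical parent on another processor, recomputing that parent locally can remove the communication delay, provided an idle slot exists. The function name and the benefit test are illustrative assumptions, not the paper's algorithms.

```python
def should_duplicate(parent_finish, comm_cost, parent_comp, proc_idle_start):
    """Duplicate the parent locally if recomputing it lets the child start earlier."""
    start_with_message = parent_finish + comm_cost        # wait for the message
    start_with_duplicate = proc_idle_start + parent_comp  # recompute the parent here
    return start_with_duplicate < start_with_message

# Example: parent finishes remotely at t=10, the message costs 8, the parent
# itself takes 5 units, and the child's processor is idle from t=6.
print(should_duplicate(10.0, 8.0, 5.0, 6.0))   # True: 11 < 18, so duplicate
```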

    Parallel approach for multiprocessor scheduling

    The objective of this research is to propose a low-complexity static scheduling and allocation algorithm for message-passing architectures that considers factors such as communication delays, link contention, message routing, and network topology. As opposed to the conventional list-scheduling approach, our technique works by first serializing the task graph and 'injecting' all the tasks into one processor. The parallel tasks are then 'bubbled up' to other processors and inserted at appropriate time slots. The edges among the tasks are also scheduled by treating communication links between the processors as resources. The proposed approach takes into account link contention and the underlying communication routing strategy, and can self-adjust on regular as well as arbitrary network topologies. To reduce complexity, the scheduling algorithm is itself parallelized; to our knowledge, this is the first attempt at designing a parallel algorithm for scheduling. Implemented on an iPSC/860 hypercube, the proposed approach yields a high speedup in its own execution and performs considerably better than two other approaches with which it is compared, across a wide range of parameters including task graph size, communication-to-computation ratio, and target system topology.
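
    An illustrative sketch of the "serialize and inject" starting point described above: order the task graph topologically (here by a simple longest-path priority) and place every task back-to-back on a single processor. The graph encoding and the bottom-level priority are assumptions for illustration only.

```python
def serialize_onto_one_processor(comp, succs):
    """Return (order, schedule) with all tasks injected onto processor 0."""
    # Bottom level: longest computation path from each task to an exit task.
    # With positive task costs, sorting by descending bottom level yields a
    # valid topological order.
    blevel = {}
    def bl(t):
        if t not in blevel:
            blevel[t] = comp[t] + max((bl(s) for s in succs.get(t, [])), default=0.0)
        return blevel[t]
    order = sorted(comp, key=bl, reverse=True)        # highest priority first
    schedule, clock = {}, 0.0
    for t in order:
        schedule[t] = (0, clock, clock + comp[t])     # (processor, start, finish)
        clock += comp[t]
    return order, schedule

# Tiny example DAG: a -> b, a -> c, b -> d, c -> d
comp = {"a": 2.0, "b": 3.0, "c": 1.0, "d": 2.0}
succs = {"a": ["b", "c"], "b": ["d"], "c": ["d"]}
print(serialize_onto_one_processor(comp, succs))
```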

    Bubble scheduling: a quasi dynamic algorithm for static allocation of tasks to parallel architectures

    We propose an algorithm for scheduling and allocation of parallel programs to message-passing architectures. The algorithm considers arbitrary computation and communication costs, arbitrary network topology, link contention, and the underlying communication routing strategy. While our technique is static, the algorithm is quasi-dynamic in that it is not specific to any particular system topology and can therefore be used at run time for whatever processor configuration is available at that time. The proposed algorithm, called the Bubble Scheduling and Allocation (BSA) algorithm, works by first serializing the task graph and 'injecting' all the tasks into one processor. The parallel tasks are then 'bubbled up' to other processors and inserted at appropriate time slots. The edges among the tasks are also scheduled by treating communication links between the processors as resources. The scheduling of messages on the links depends on the routing strategy of the underlying network, such as circuit switching or wormhole routing. The proposed algorithm has admissible time complexity and is suitable for regular as well as irregular task graph structures.
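
    A minimal sketch of the "bubble up" refinement described above: revisit the tasks in their serialized priority order and move each one to whichever processor gives it the earliest start time, charging a message delay only when a parent ends up on a different processor. Link contention and the routing strategy are deliberately ignored here; this is an assumed simplification of the BSA idea, not the published algorithm.

```python
def bubble_up(order, comp, comm, preds, n_procs):
    """order: topological task order; comm[(u, t)]: cost of edge u -> t."""
    schedule = {}                        # task -> (processor, start, finish)
    proc_free = [0.0] * n_procs
    for t in order:
        best = None                      # (start, processor)
        for p in range(n_procs):
            ready = max((schedule[u][2] + (comm[(u, t)] if schedule[u][0] != p else 0.0)
                         for u in preds.get(t, [])), default=0.0)
            start = max(ready, proc_free[p])
            if best is None or start < best[0]:
                best = (start, p)
        start, p = best
        schedule[t] = (p, start, start + comp[t])
        proc_free[p] = start + comp[t]
    return schedule
```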

    Performance study of multiple access control protocols for wireless multimedia services

    The multiple access control (MAC) problem in a wireless network has intrigued researchers for years. For a broadband wireless multimedia network such as wireless ATM, an effective MAC protocol is highly desirable because efficient allocation of channel bandwidth is imperative for accommodating a large user population with satisfactory quality of service. Indeed, MAC protocols for a wireless ATM network, in which user traffic requirements are highly heterogeneous (classified into CBR, VBR, and ABR), are even more intricate to design. Considerable research efforts expended in tackling the problem have resulted in a myriad of MAC protocols. While each protocol is individually shown to be effective by its respective designers, it is unclear how these different protocols compare against each other on a unified basis. In this paper, we quantitatively compare seven recently proposed TDMA-based MAC protocols for integrated wireless data and voice services. We first propose a taxonomy of TDMA-based protocols, from which we carefully select seven protocols, namely SCAMA, DTDMA/VR, DTDMA/PR, DQRUMA, DPRMA, DSA++, and PRMA/DA, chosen because they are devised from rather orthogonal design philosophies. The objective of our comparison is to highlight the merits and demerits of the different protocol designs.

    Analysis, evaluation, and comparison of algorithms for scheduling task graphs on parallel processors

    In this paper, we survey algorithms that allocate a parallel program represented by an edge-weighted directed acyclic graph (DAG), also called a task graph or macro-dataflow graph, to a set of homogeneous processors, with the objective of minimizing the completion time. We analyze 21 such algorithms and classify them into four groups. The first group includes algorithms that schedule the DAG to a bounded number of processors directly; these are called the bounded number of processors (BNP) scheduling algorithms. The algorithms in the second group schedule the DAG to an unbounded number of clusters and are called the unbounded number of clusters (UNC) scheduling algorithms. The algorithms in the third group schedule the DAG using task duplication and are called the task duplication based (TDB) scheduling algorithms. The algorithms in the fourth group perform allocation and mapping on arbitrary processor network topologies; these are called the arbitrary processor network (APN) scheduling algorithms. The design philosophies and principles behind these algorithms are discussed, and the performance of all the algorithms is evaluated and compared on a unified basis using various scheduling parameters.